
Why the Prospect of the IRS Using Facial Recognition Is So Alarming

Slate

The U.S. Internal Revenue Service is planning to require citizens to create accounts with a private facial recognition company in order to file taxes online. The IRS is joining a growing number of federal and state agencies that have contracted with ID.me to authenticate the identities of people accessing services. The IRS's move is aimed at cutting down on identity theft, a crime that affects millions of Americans. The IRS, in particular, has reported a number of tax filings from people claiming to be others, and fraud in many of the programs administered as part of the American Rescue Plan has been a major concern to the government. The IRS decision has prompted a backlash, in part over concerns about requiring citizens to use facial recognition technology and in part over difficulties some people have had in using the system, particularly with some state agencies that provide unemployment benefits.


Facial Recognition company, Clearview AI asked to 'clear' its stolen data - TechStory

#artificialintelligence

Clearview AI, on the other hand, has maintained its innocence, arguing that its business has no connection to Australia and no Australian users. Hoan Ton-That, CEO of the New York-based company, said the practice was not illegal and was carried out with all laws and regulations in mind. He clarified that the facial images scraped from numerous social media platforms such as LinkedIn, Facebook, and Instagram were ones publicly available on the open internet and did not violate any law. He even expressed his disappointment, saying that while he respects the country, its citizens, and the officials who spent time and energy on the inquiry, his technology had been misinterpreted and devalued.


Despite controversies and bans, facial recognition startups are flush with VC cash – TechCrunch

#artificialintelligence

If efforts by states and cities to pass privacy regulations curbing the use of facial recognition are anything to go by, you might fear the worst for the companies building the technology. But a recent influx of investor cash suggests the facial recognition startup sector is thriving, not suffering. Facial recognition is one of the most controversial and complex policy areas in play. The technology can be used to track where you go and what you do. It's used by public authorities and in private businesses like stores.


'Racist' facial recognition sparks ethical concerns in Russia

#artificialintelligence

TBILISI, July 5 (Thomson Reuters Foundation) - (Editor's note: contains offensive language and terms of racial abuse) From scanning residents' faces to let them into their building to spotting police suspects in a crowd, the rise of facial recognition is accompanied by a growing chorus of concern about unethical uses of the technology. A report published on Monday by U.S.-based researchers showing that Russian facial recognition companies have built tools to detect a person's race has raised fears among digital rights groups, who describe the technology as "purpose-made for discrimination." Developer guides and code examples unearthed by video surveillance research firm IPVM show software advertised by four of Russia's biggest facial analytics firms can use artificial intelligence (AI) to classify faces based on their perceived ethnicity or race. There is no indication yet that Russian police have targeted minorities using the software developed by the firms - AxxonSoft, Tevian, VisionLabs and NtechLab - whose products are sold to authorities and businesses in the country and abroad. But Moscow-based AxxonSoft said the Thomson Reuters Foundation's enquiry prompted it to disable its ethnicity analytics feature, saying in an emailed response it was not interested "in promoting any technologies that could be a basis for ethnic segregation".


A Facial Recognition Company's First Amendment Theory Threatens Privacy--and Free Speech

#artificialintelligence

What could be one of the most consequential First Amendment cases of the digital age is pending before a court in Illinois and will likely be argued before the end of the year. The case concerns Clearview AI, the technology company that surreptitiously scraped 3 billion images from the internet to feed a facial recognition app it sold to law enforcement agencies. Now confronting multiple lawsuits based on an Illinois privacy law, the company has retained Floyd Abrams, the prominent First Amendment litigator, to argue that its business activities are constitutionally protected. Landing Abrams was a coup for Clearview, but whether anyone else should be celebrating is less clear. A First Amendment that shielded Clearview and other technology companies from reasonable privacy regulation would be bad for privacy, obviously, but it would be bad for free speech, too.


Facial recognition designed to detect around face masks is failing, study finds

#artificialintelligence

Algorithms designed specifically for face masks are getting stumped, researchers found. Many facial recognition companies have claimed they can identify people with pinpoint accuracy even while they're wearing face masks, but the latest results from a study show that the coverings are dramatically increasing error rates. In an update Tuesday, the US National Institute of Standards and Technology looked at 41 facial recognition algorithms submitted after the COVID-19 pandemic was declared in mid-March. Many of these algorithms were designed with face masks in mind, and their developers claimed they could still accurately identify people even when half of the face was covered.


Facial recognition company that scrapes social media sites to be investigated by UK and Australia

The Independent - Tech

The UK's Information Commissioner's Office and the Australian Information Commissioner have announced a joint investigation into Clearview AI. The data watchdogs will focus "on the company's use of 'scraped' data and biometrics of individuals," they said in a statement. The investigation follows a similar announcement by the Office of the Privacy Commissioner of Canada, which has also opened an investigation into Clearview AI. "The joint investigation was initiated in the wake of media reports which stated that Clearview AI was using its technology to collect images and make facial recognition available to law enforcement in the context of investigations," the Canadian statement says. "Reports have also indicated the US-based company provides services in a number of countries to a broad range of organizations, including retailers, financial institutions and various government institutions." The company had advised the privacy protection authorities that, in response to their investigation, it would be withdrawing its services from Canada.


Russia's facial recognition system 'Orwell' to monitor schools

Daily Mail - Science & tech

Tens of thousands of Russian schools will soon use a facial recognition technology called 'Orwell' to monitor children and teachers during school hours. According to a report from the Russian business newspaper Vedomosti, the systems will be introduced to 43,000 schools across the country and are already in use in 1,600. Elvees Neotech, the company behind 'Orwell', says that the technology is designed for 'automatic detection and classification of targets', which includes identifying people as well as 'situations'. NtechLab, a facial recognition company, will support the Orwell system. On its website, Elvees Neotech says that the software is capable of identifying crowds of people, detecting when targets have crossed a preset boundary, and even recognizing specific license plates.


Even facial recognition supporters say the tech won't stop school shootings

#artificialintelligence

After a school shooting in Parkland, Florida left 17 people dead, RealNetworks decided to make its facial recognition technology available for free to schools across the US and Canada. If school officials could detect strangers on their campuses, they might be able to stop shooters before they got to a classroom. Anxious to keep children safe from gun violence, thousands of schools reached out with interest in the technology. Dozens started using SAFR, RealNetworks' facial recognition technology. From working with schools, RealNetworks, the streaming media company, says it's learned an important lesson: Facial recognition isn't likely an effective tool for preventing shootings.